Does Learning Mean a Decrease in Entropy?
Abstract
Shannon introduced his concept of entropy as a measure of the uncertainty as to the outcome of some event. If this is the case, then learning about a system should be reflected by a real decrease in the entropy of the variable that is the target of learning. I examined a robot that learned to navigate in a simple arena, and analyzed the entropy of the variables for action, state, and reward. Entropy does indeed decrease in the initial stages of learning, but the introduction of new options complicates the scenario and can in fact lead to a subsequent increase in entropy. I examine the implications of this finding and propose a model for general learning.
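The abstract's core quantity, Shannon entropy over a discrete variable such as the robot's action, can be illustrated with a minimal sketch. The action names and sample sequences below are hypothetical, purely to show how entropy of the action variable falls as behavior concentrates on fewer options:

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """Shannon entropy (in bits) of a sequence of discrete samples."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical action logs: early exploration spreads over three actions;
# after learning, one action dominates, so entropy drops.
exploring = ["left", "right", "forward", "left", "right", "forward"]
converged = ["forward", "forward", "forward", "forward", "forward", "left"]

print(shannon_entropy(exploring))  # log2(3) ~ 1.585 bits (uniform over 3 actions)
print(shannon_entropy(converged))  # ~0.650 bits (mostly one action)
```

By the same logic, adding a new option to the converged repertoire spreads probability mass again and raises the entropy, which is the reversal the abstract describes.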
Similar Articles
Does Learning Imply a Decrease in the Entropy of Behavior?
Shannon’s information entropy measures the uncertainty of an event’s outcome. If learning about a system reflects a decrease in uncertainty, then a plausible intuition is that learning should be accompanied by a decrease in the entropy of the organism’s actions and/or perceptual states. To address whether this intuition is valid, I examined an artificial organism – a simple robot – that lear...
Cycle Time Optimization of Processes Using an Entropy-Based Learning for Task Allocation
Cycle time optimization could be one of the great challenges in business process management. Although there is much research on this subject, task similarities have been paid little attention. In this paper, a new approach is proposed to optimize cycle time by minimizing entropy of work lists in resource allocation while keeping workloads balanced. The idea of the entropy of work lists comes fr...
On the Smoothed Minimum Error Entropy Criterion
Recent studies suggest that the minimum error entropy (MEE) criterion can outperform the traditional mean square error criterion in supervised machine learning, especially in nonlinear and non-Gaussian situations. In practice, however, one has to estimate the error entropy from the samples since in general the analytical evaluation of error entropy is not possible. By the Parzen windowing appro...
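The Parzen-windowing approach mentioned in this abstract is commonly sketched via the quadratic Rényi entropy of the error samples, estimated with a Gaussian kernel. The function below is a generic illustration of that idea, not the paper's exact estimator; the kernel width `sigma` is an assumed parameter:

```python
import math

def renyi_quadratic_entropy(errors, sigma=0.5):
    """Parzen-window estimate of the quadratic Renyi entropy of error
    samples: H2 = -log(information potential), where the information
    potential is the mean pairwise Gaussian kernel (width sigma*sqrt(2))."""
    n = len(errors)
    s2 = 2.0 * sigma * sigma  # variance of the pairwise kernel
    information_potential = sum(
        math.exp(-(ei - ej) ** 2 / (2.0 * s2)) / math.sqrt(2.0 * math.pi * s2)
        for ei in errors for ej in errors
    ) / (n * n)
    return -math.log(information_potential)

# Tightly clustered errors have a larger information potential,
# hence a lower estimated error entropy, which MEE training minimizes.
print(renyi_quadratic_entropy([0.0, 0.01, -0.01, 0.02]))
print(renyi_quadratic_entropy([-2.0, -1.0, 0.0, 1.0, 2.0]))
```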
Determination of weight vector by using a pairwise comparison matrix based on DEA and Shannon entropy
The relation between the analytic hierarchy process (AHP) and data envelopment analysis (DEA) is a topic of interest to researchers in this branch of applied mathematics. In this paper, we propose a linear programming model that generates a weight (priority) vector from a pairwise comparison matrix. In this method, which is referred to as the E-DEAHP method, we consider each row of the pairwise...
Estimation of the Entropy Rate of Ergodic Markov Chains
In this paper an approximation for entropy rate of an ergodic Markov chain via sample path simulation is calculated. Although there is an explicit form of the entropy rate here, the exact computational method is laborious to apply. It is demonstrated that the estimated entropy rate of Markov chain via sample path not only converges to the correct entropy rate but also does it exponential...
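The sample-path idea in this abstract can be sketched as follows: simulate the chain and average the surprisal −log P(x_t, x_{t+1}) along the path, which converges to the entropy rate Σᵢ πᵢ Σⱼ −Pᵢⱼ log Pᵢⱼ. The two-state transition matrix below is a hypothetical example, not taken from the paper:

```python
import math
import random

# Hypothetical ergodic two-state chain.
P = [[0.9, 0.1],
     [0.2, 0.8]]

# Analytic entropy rate (nats): H = sum_i pi_i * sum_j -P_ij * log(P_ij),
# where pi solves pi P = pi. For this chain, pi = (2/3, 1/3).
pi = [2.0 / 3.0, 1.0 / 3.0]
H_exact = sum(
    pi[i] * -sum(P[i][j] * math.log(P[i][j]) for j in range(2))
    for i in range(2)
)

def sample_path_entropy_rate(P, n=200_000, seed=0):
    """Estimate the entropy rate by averaging -log P[x_t][x_{t+1}]
    along a simulated sample path of length n."""
    rng = random.Random(seed)
    state, total = 0, 0.0
    for _ in range(n):
        nxt = 0 if rng.random() < P[state][0] else 1
        total -= math.log(P[state][nxt])
        state = nxt
    return total / n

H_est = sample_path_entropy_rate(P)
print(H_exact, H_est)  # the estimate should be close to the analytic value
```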
Publication year: 2009